child sexual abuse
Sex-Fantasy Chatbots Are Leaking a Constant Stream of Explicit Messages
Several AI chatbots designed for fantasy and sexual role-playing conversations are leaking user prompts to the web in almost real time, new research seen by WIRED shows. Some of the leaked data shows people creating conversations detailing child sexual abuse, according to the research. Conversations with generative AI chatbots are near instantaneous: you type a prompt and the AI responds. If the systems are configured improperly, however, chats can be exposed. In March, researchers at the security firm UpGuard discovered around 400 exposed AI systems while scanning the web for misconfigurations.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.82)
- Law > Family Law (0.58)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.43)
New laws close gap in California on deepfake child pornography
Using an AI-powered app to create fake nude pictures of people without their consent violates all sorts of norms, especially when those people are minors. It would not, however, violate California law, at least not yet. A pair of bills newly signed by Gov. Gavin Newsom outlaw the creation, possession and distribution of sexually charged images of minors even when they're created with computers, not cameras. The measures take effect Jan. 1. The expansion of state prohibitions comes as students are increasingly being victimized by apps that use artificial intelligence either to take a photo of a fully clothed real person and digitally generate a nude body ("undresser" apps) or to seamlessly superimpose the image of a person's face onto a nude body from a pornographic video.
- North America > United States > California > Ventura County (0.06)
- North America > United States > California > Los Angeles County > Beverly Hills (0.06)
- North America > United States > Pennsylvania (0.05)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law > Criminal Law (0.92)
- Education > Educational Setting > K-12 Education (0.34)
AI is overpowering efforts to catch child predators, experts warn
The volume of sexually explicit images of children being generated by predators using artificial intelligence is overwhelming law enforcement's capabilities to identify and rescue real-life victims, child safety experts warn. Prosecutors and child safety groups working to combat crimes against children say AI-generated images have become so lifelike that in some cases it is difficult to determine whether real children have been subjected to real harms for their production. A single AI model can generate tens of thousands of new images in a short amount of time, and this content has begun to flood the dark web and seep into the mainstream internet. "We are starting to see reports of images that are of a real child but have been AI-generated, but that child was not sexually abused. But now their face is on a child that was abused," said Kristina Korobov, senior attorney at the Zero Abuse Project, a Minnesota-based child safety non-profit.
- North America > United States > Minnesota (0.25)
- North America > United States > Washington (0.05)
- North America > United States > California > Los Angeles County > Los Angeles (0.05)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
AI image generators trained on pictures of child sexual abuse, study finds
Hidden inside the foundation of popular artificial intelligence (AI) image generators are thousands of images of child sexual abuse, according to new research published on Wednesday. The operators of some of the largest and most widely used sets of images for training AI shut off access to them in response to the study. The Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that's been used to train leading AI image-makers such as Stable Diffusion. The watchdog group based at Stanford University worked with the Canadian Centre for Child Protection and other anti-abuse charities to identify the illegal material and report the original photo links to law enforcement. More than 1,000 of the suspected images were confirmed as child sexual abuse material.
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (1.00)
UK school pupils 'using AI to create indecent imagery of other children'
Children in British schools are using artificial intelligence (AI) to make indecent images of other children, a group of experts on child abuse and technology has warned. They said that a number of schools were reporting for the first time that pupils were using AI-generating technology to create images of children that legally constituted child sexual abuse material. Emma Hardy, UK Safer Internet Centre (UKSIC) director, said the pictures were "terrifyingly" realistic. "The quality of the images that we're seeing is comparable to professional photos taken annually of children in schools up and down the country," said Hardy, who is also the Internet Watch Foundation communications director. "The photo-realistic nature of AI-generated imagery of children means sometimes the children we see are recognisable as victims of previous sexual abuse. Children must be warned that it can spread across the internet and end up being seen by strangers and sexual predators."
Could AI-Generated Porn Help Protect Children?
Now that generative AI models can produce photorealistic, fake images of child sexual abuse, regulators and child safety advocates are worried that an already-abhorrent practice will spiral further out of control. But lost in this fear is an uncomfortable possibility: that AI-generated child pornography could actually benefit society in the long run by providing a less harmful alternative to the already-massive market for images of child sexual abuse. The growing consensus among scientists is that pedophilia is biological in nature, and that keeping pedophilic urges at bay can be incredibly difficult. "What turns us on sexually, we don't decide that; we discover that," said psychiatrist Dr. Fred Berlin, director of the Johns Hopkins Sex and Gender Clinic and an expert on paraphilic disorders. "It's not because [pedophiles have] chosen to have these kinds of urges or attractions. They've discovered through no fault of their own that this is the nature of what they're afflicted with in terms of their own sexual makeup … We're talking about not giving into a craving, a craving that is rooted in biology, not unlike somebody who's having a craving for heroin."
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
Artificial intelligence could help 'normalize' child sexual abuse as graphic images erupt online: experts
Artificial intelligence is opening the door to a disturbing trend of people creating realistic images of children in sexual settings, which could increase the number of cases of sex crimes against kids in real life, experts warn. AI platforms that can mimic human conversation or create realistic images exploded in popularity late last year into 2023 following the release of the chatbot ChatGPT, which served as a watershed moment for the use of artificial intelligence. As the curiosity of people across the world was piqued by the technology for work or school tasks, others have embraced the platforms for more nefarious purposes. The National Crime Agency, which is the UK's lead agency combating organized crime, warned this week that the proliferation of machine-generated explicit images of children is having a "radicalizing" effect, "normalizing" pedophilia and disturbing behavior against kids.
- North America > United States (0.17)
- Europe > United Kingdom > Northern Ireland (0.05)
- Media (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- Law (1.00)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.42)
Homeland Security to explore using AI to detect fentanyl shipments
Homeland Security Secretary Alejandro Mayorkas on Friday announced the creation of a task force that will assess the ways artificial intelligence can be used to detect shipments of dangerous fentanyl to the U.S., screen cargo and take on other tasks aimed at shoring up U.S. national security. "I am directing the creation of our department's first Artificial Intelligence Task Force that will drive specific applications of AI to advance our critical homeland security missions," Mayorkas said Friday. "Countering the multi-faceted threat posed by the PRC, learning from major cyber incidents, and harnessing the power of AI to advance our security will draw on the entirety of the capabilities and expertise the 260,000 personnel of DHS bring to bear every single day," he said. "It will require continued investment in our operational cohesion, our ability to work together in ways our founders never imagined."
Independent research firm sued by Apple now wants to help vet the phone maker's child sexual abuse scanning system
Apple's new software, which looks at phones for evidence of child pornography, has created a new need for security research, according to Apple. Other large tech companies, such as Facebook and Microsoft, scan their servers for child porn using a software product called PhotoDNA, developed by Microsoft and Dartmouth professor Hany Farid. The software relies on a database of known child pornography maintained by the National Center for Missing and Exploited Children. If a photo on a company server matches the database, it is flagged and authorities are notified. Companies have employed that system on their servers, not on devices owned by their customers.
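The match-and-flag flow described above can be sketched in a few lines. This is a simplified illustration, not PhotoDNA itself: PhotoDNA uses a proprietary perceptual hash that tolerates resizing and re-encoding, whereas the cryptographic hash below only catches byte-exact copies, and the hash database and function names here are invented placeholders.

```python
import hashlib

# Hypothetical known-image database. Real systems hold perceptual
# hashes supplied by a clearinghouse, not SHA-256 digests.
KNOWN_HASHES = {
    # sha256(b"test"), seeded purely for demonstration
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_if_known(image_bytes: bytes) -> bool:
    """Hash the uploaded image and check it against the known database."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_HASHES

# A True result is what would trigger flagging and a report to authorities.
print(flag_if_known(b"test"))          # True: digest is in the database
print(flag_if_known(b"other image"))   # False: no match
```

The design point the article touches on is where this check runs: the companies named above run it server-side on uploaded content, whereas Apple's proposal moved the matching onto customers' devices, which is what drew security researchers' scrutiny.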
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (0.85)
- Health & Medicine > Therapeutic Area > Pediatrics/Neonatology (0.40)
- Information Technology > Communications > Social Media (0.63)
- Information Technology > Communications > Mobile (0.40)
- Information Technology > Artificial Intelligence > Vision (0.40)
Artificial intelligence could be used to catch paedophiles prowling on the web
Artificial intelligence could be used to help catch paedophiles operating on the dark web. The technology would target the most dangerous and sophisticated offenders in efforts to tackle child sexual abuse, the Home Office said. Earlier this month Chancellor Sajid Javid announced that £30 million would be set aside to tackle online child sexual exploitation. The Government has pledged to spend more money on the Child Abuse Image Database (CAID), which since 2014 has allowed police and other law enforcement agencies to quickly check seized computers and other devices against a record of 14 million indecent images of children, helping to identify victims. The investment will be used to consider whether adding aspects of artificial intelligence (AI) to the system, to analyse voices and estimate ages, would help in tracking down child abusers.
- Europe > United Kingdom (1.00)
- Africa > Ethiopia (0.07)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Communications > Web (0.45)